Localized activity profiles and storage capacity of rate-based autoassociative networks
We study analytically the effect of metrically structured connectivity on the
behavior of autoassociative networks. We focus on three simple rate-based model
neurons: threshold-linear, binary or smoothly saturating units. For a
connectivity which is short range enough the threshold-linear network shows
localized retrieval states. The saturating and binary models also exhibit
spatially modulated retrieval states if the highest activity level that they
can achieve is above the maximum activity of the units in the stored patterns.
In the zero quenched noise limit, we derive an analytical formula for the
critical value of the connectivity width below which one observes spatially
non-uniform retrieval states. Localization reduces storage capacity, but only
by a factor of 2 to 3. The approach we present here is generic, in the sense
that it makes no specific assumptions about the single-unit input-output
function or the exact connectivity structure.
Comment: 4 pages, 4 figures
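The storage scheme described in this abstract can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's model: the network size, sparsity, Gaussian kernel width, gain, and threshold are all assumed values, and a crude normalization stands in for the paper's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 200, 5, 0.2            # units on a ring, stored patterns, sparsity (assumed)
sigma = 0.1 * N                  # connectivity width (illustrative value)

# Sparse binary patterns stand in for the stored activity patterns
xi = (rng.random((P, N)) < a).astype(float)

# Hebbian covariance weights, modulated by a Gaussian kernel on ring distance
d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
d = np.minimum(d, N - d)                       # periodic (ring) distance
kernel = np.exp(-d**2 / (2 * sigma**2))
W = kernel * ((xi - a).T @ (xi - a)) / (a * (1 - a) * N)
np.fill_diagonal(W, 0.0)

# Threshold-linear dynamics started from a degraded cue of pattern 0
v = xi[0] * (rng.random(N) < 0.8)
g, thr = 0.5, 0.0
for _ in range(50):
    v = g * np.maximum(W @ v - thr, 0.0)       # [x]+ transfer function
    if v.max() > 1:
        v /= v.max()                           # crude gain control, keeps activity bounded

print("active fraction:", (v > 0).mean())
```

With a short-range kernel (small `sigma`), activity that survives the dynamics tends to concentrate on a contiguous stretch of the ring, which is the kind of spatially modulated retrieval state the abstract analyzes.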
How informative are spatial CA3 representations established by the dentate gyrus?
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by
sparse and powerful unidirectional projections to CA3 pyramidal cells, the
so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of
the information they convey, what CA3 cells already receive from entorhinal
cortex layer II cells, which project both to the dentate gyrus and to CA3.
Computational models of episodic memory have hypothesized that the function of
the mossy fibers is to enforce a new, well separated pattern of activity onto
CA3 cells, to represent a new memory, prevailing over the interference produced
by the traces of older memories already stored on CA3 recurrent collateral
connections. Can this hypothesis apply also to spatial representations, as
described by recent neurophysiological recordings in rats? To address this
issue quantitatively, we estimate the amount of information DG can impart on a
new CA3 pattern of spatial activity, using both mathematical analysis and
computer simulations of a simplified model. We confirm that, in the spatial
case too, the observed sparse connectivity and level of activity are most
appropriate for driving memory storage, not for initiating retrieval.
Surprisingly, the model also indicates that even when DG codes just for space,
much of the information it passes on to CA3 acquires a non-spatial and episodic
character, akin to that of a random number generator. It is suggested that
further hippocampal processing is required to make full spatial use of DG
inputs.
Comment: 19 pages, 11 figures, 1 table, submitted
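The mossy-fiber hypothesis discussed above amounts to very sparse, strong feedforward selection of a new CA3 pattern. A minimal caricature, with assumed population sizes, activity level, and connection probability (not the values estimated in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dg, n_ca3 = 1000, 300          # granule cells, CA3 pyramidal cells (assumed sizes)
a_dg = 0.005                     # very sparse DG activity (assumed)
c_mf = 0.01                      # mossy-fiber connection probability (assumed)

dg = (rng.random(n_dg) < a_dg).astype(float)            # active granule cells
C = (rng.random((n_ca3, n_dg)) < c_mf).astype(float)    # sparse MF connectivity

h = C @ dg                                   # mossy-fiber drive onto CA3
new_pattern = (h > 0).astype(float)          # CA3 cells recruited by the mossy fibers
print("CA3 cells recruited:", int(new_pattern.sum()))
```

Because so few granule cells are active and the projection is so sparse, the identity of the recruited CA3 cells is essentially a random draw; this is the sense in which, per the abstract, much of the information DG imparts has the character of a random number generator.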
Disappearance of Spurious States in Analog Associative Memories
We show that symmetric n-mixture states, when they exist, are almost never
stable in autoassociative networks with threshold-linear units. Only with a
binary coding scheme could we find a limited region of the parameter space in
which either 2-mixtures or 3-mixtures are stable attractors of the dynamics.
Comment: 5 pages, 3 figures, accepted for publication in Phys Rev
Attractor neural networks storing multiple space representations: a model for hippocampal place fields
A recurrent neural network model storing multiple spatial maps, or
"charts", is analyzed. A network of this type has been suggested as a model
for the origin of place cells in the hippocampus of rodents. The extremely
diluted and fully connected limits are studied, and the storage capacity and
the information capacity are found. The important parameters determining the
performance of the network are the sparsity of the spatial representations and
the degree of connectivity, as found already for the storage of individual
memory patterns in the general theory of auto-associative networks. Such
results suggest a quantitative parallel between theories of hippocampal
function in different animal species, such as primates (episodic memory) and
rodents (memory for space).
Comment: 19 RevTeX pages, 8 figures
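The multi-chart storage described above can be sketched as a Hebbian superposition of distance kernels, one per chart, with each cell assigned an independent random position in every chart. All parameter values below (ring geometry, field width, threshold) are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 200, 3                    # place cells, stored charts (assumed sizes)
sigma = 0.05                     # place-field width on the unit ring (assumed)

# In each chart, every cell gets a random position on a 1-D periodic ring
pos = rng.random((M, N))

def ring_dist(x, y):
    d = np.abs(x[:, None] - y[None, :])
    return np.minimum(d, 1 - d)

# Hebbian superposition: weights sum the distance kernels of all charts
W = sum(np.exp(-ring_dist(pos[m], pos[m])**2 / (2 * sigma**2)) for m in range(M))
np.fill_diagonal(W, 0.0)

# Cue a location near x = 0.5 in chart 0 and iterate threshold-linear dynamics
v = np.exp(-ring_dist(pos[0], np.array([0.5]))[:, 0]**2 / (2 * sigma**2))
for _ in range(30):
    v = np.maximum(W @ v / N - 0.01, 0.0)    # threshold-linear update
    if v.max() > 0:
        v /= v.max()                          # crude activity normalization
```

The surviving activity bump is localized in the cued chart but looks scattered when the same cells are plotted by their positions in any other chart, which is the signature of chart-based remapping.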
Chapter 19 Information coding in higher sensory and memory areas
This chapter discusses information coding in higher sensory and memory areas. Neurons are vastly simpler than human beings are, but the metaphor is not completely silly, because it illustrates the volatility of the notion of neural codes. Information theory was developed precisely to quantify communication, and it is quintessential to an appraisal of neural codes. Applying information theory to neural activity (rather than to the synthetic communication systems for which it was developed) is, however, riddled with practical problems and subtleties, which must be clarified before reporting experimental results. The chapter considers no other means of neuronal communication than the emission of action potentials, or spikes, and regards them as self-similar all-or-none events whose only distinctive features are the time of emission and the identity of the emitting neuron. The extent to which the firing rates of a population of neurons may or may not carry most of the information represented in the complete list of spike emission times is a question to be addressed experimentally in any given situation.
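One of the "practical problems" alluded to above is estimating mutual information from limited data. The sketch below is a generic plug-in estimate on simulated Poisson spike counts, not the chapter's procedure; the stimulus set, rates, and trial count are assumed values, and the naive estimator is known to be biased upward for small samples.

```python
import numpy as np

rng = np.random.default_rng(3)
stimuli, trials = 4, 500
rates = np.array([2.0, 4.0, 8.0, 16.0])   # assumed mean spike counts per stimulus

# Simulated spike counts: Poisson responses to each presented stimulus
s = rng.integers(0, stimuli, size=trials)
n = rng.poisson(rates[s])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Plug-in estimate of I(S; N) = H(N) - H(N | S) from empirical distributions
counts = np.arange(n.max() + 1)
p_n = np.array([(n == c).mean() for c in counts])
H_n = entropy(p_n)
H_n_given_s = 0.0
for st in range(stimuli):
    mask = s == st
    p_ns = np.array([(n[mask] == c).mean() for c in counts])
    H_n_given_s += mask.mean() * entropy(p_ns)

info = H_n - H_n_given_s       # in bits; biased upward for small trial counts
print(f"plug-in information estimate: {info:.2f} bits")
```

With four equiprobable stimuli the estimate is bounded by 2 bits; comparing estimates across trial counts is one standard way to gauge the limited-sampling bias.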
Grammatical Parameters from a Gene-like Code to Self-Organizing Attractors
Parametric approaches to grammatical diversity range from Chomsky's 1981
classical Principles & Parameters model to minimalist reinterpretations: in
some proposals of the latter framework, parameters need not be an extensional
list given at the initial state S0 of the mind, but can be constructed through
a bio-program in the course of language development. In this contribution we
pursue this lead and discuss initial data and ideas relevant for the
elaboration of three sets of questions:
1) how can binary parameters be conceivably implemented in cortical and
subcortical circuitry in the human brain?
2) how can parameter mutations be taken to occur?
3) given the distribution of parameter values across languages and their
implications, can multi-parental models of language phylogenies, departing from
ultrametricity, also account for some of the available evidence?
Comment: 10 pages, 3 figures, 1 table. Contribution submitted to "A Cartesian dream: A geometrical account of syntax. In honor of Andrea Moro", M. Greco and D. Mocci, eds., Rivista di Grammatica Generativa/Research in Generative Grammar, Lingbuzz Press, ISSN 2531-593
Uninformative memories will prevail: the storage of correlated representations and its consequences
Autoassociative networks were proposed in the 1980s as simplified models of
memory function in the brain, using recurrent connectivity with Hebbian
plasticity to store patterns of neural activity that can later be recalled.
This type of computation has been suggested to take place in the CA3 region of
the hippocampus and at several levels in the cortex. One of the weaknesses of
these models is their apparent inability to store correlated patterns of
activity. We show, however, that a small and biologically plausible
modification in the `learning rule' (associating to each neuron a plasticity
threshold that reflects its popularity) enables the network to handle
correlations. We study the stability properties of the resulting memories (in
terms of their resistance to the damage of neurons or synapses), finding a
novel property of autoassociative networks: not all memories are equally
robust, and the most informative are also the most sensitive to damage. We
relate these results to category-specific effects in semantic memory patients,
where concepts related to `non-living things' are usually more resistant to
brain damage than those related to `living things', a phenomenon suspected to
be rooted in the correlation between representations of concepts in the cortex.
Comment: 24 pages, 3 figures. Submitted to the HFSP Journal, where it is now accepted. New version has EPS figures and includes deep structural changes following the reviewers' suggestions
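The modified learning rule described above can be sketched by replacing the single global sparsity of the standard covariance rule with a per-neuron threshold equal to that neuron's popularity across the stored patterns. The pattern statistics and dynamics below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 100, 20                   # neurons, stored patterns (assumed sizes)

# Correlated binary patterns: some neurons are "popular", active in many memories
popularity = rng.beta(2, 5, size=N)              # per-neuron activation probability
xi = (rng.random((P, N)) < popularity).astype(float)

# Modified Hebbian covariance rule: each neuron's plasticity threshold a_i
# is its own mean activity across patterns, rather than one global sparsity
a_i = xi.mean(axis=0)
W = (xi - a_i).T @ (xi - a_i) / N
np.fill_diagonal(W, 0.0)

# Retrieval of pattern 0 from a degraded cue, with simple binary dynamics
v = xi[0] * (rng.random(N) < 0.7)
for _ in range(20):
    v = (W @ v > 0).astype(float)
overlap = (v == xi[0]).mean()
print("fraction of units matching pattern 0:", overlap)
```

Subtracting `a_i` removes each neuron's popularity from the stored trace, which is what lets the network store correlated patterns; with a single global threshold, popular neurons would dominate every memory and retrieval of correlated patterns would collapse.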